Financial Time Series Models (ARCH/GARCH)

Plotting Data

Code
library('quantmod')
getSymbols("UPS", from="2010-01-01", src="yahoo")
[1] "UPS"
Code
head(UPS)
           UPS.Open UPS.High UPS.Low UPS.Close UPS.Volume UPS.Adjusted
2010-01-04    58.18    58.82   57.98     58.18    3897200     37.63985
2010-01-05    58.25    59.00   58.12     58.28    5966300     37.70455
2010-01-06    58.21    58.27   57.81     57.85    5770200     37.42636
2010-01-07    57.96    57.96   57.19     57.41    5747000     37.14171
2010-01-08    59.77    61.13   59.52     60.17   13779300     38.92730
2010-01-11    60.55    63.38   60.50     62.82   13744900     40.64173
Code
#any(is.na(UPS))

ups.close<- Ad(UPS)

# candlestick plot of the UPS prices
chartSeries(UPS, type = "candlesticks",theme='white')

Code
# returns plot
returns_ups = diff(log(ups.close))
chartSeries(returns_ups, theme="white")

The price series itself is non-stationary, and the returns display volatility clustering, most evident over the 2020–2022 period.

Code
library('quantmod')
getSymbols("JBHT", from="2010-01-01", src="yahoo")
[1] "JBHT"
Code
head(JBHT)
           JBHT.Open JBHT.High JBHT.Low JBHT.Close JBHT.Volume JBHT.Adjusted
2010-01-04     32.56     33.11    32.24      33.06     1876000      28.72956
2010-01-05     33.05     33.15    32.46      33.15     2186900      28.80777
2010-01-06     32.97     33.30    32.85      32.95     1147200      28.63397
2010-01-07     32.88     33.26    32.62      33.08     1272400      28.74694
2010-01-08     33.12     34.14    33.12      34.04     2068200      29.58119
2010-01-11     34.04     34.90    33.98      34.54     1897100      30.01570
Code
#any(is.na(JBHT))

jbht.close <- Ad(JBHT)

# candlestick plot of the JBHT prices
chartSeries(JBHT, type = "candlesticks", theme = 'white')

Code
# returns plot
returns_jbht = diff(log(jbht.close))
chartSeries(returns_jbht, theme="white")

The price series itself is non-stationary, and the returns display volatility clustering, most evident over the 2020–2022 period.

ACF and PACF Plots

Code
# returns_ups
ggAcf(returns_ups, na.action = na.pass) 

Code
ggPacf(returns_ups, na.action = na.pass) 

The returns are weakly stationary, with only a few significant spikes in the ACF and PACF (suggesting p = 4, q = 4), so an ARIMA model is a reasonable starting point.

Code
ggAcf(returns_ups^2, na.action = na.pass) 

Code
ggPacf(returns_ups^2, na.action = na.pass) 

The squared returns show significant autocorrelation at lags of roughly p = 1 to 6 (PACF) and q = 0 to 5 (ACF). Given this observation, a GARCH model is more appropriate.
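The visual impression of ARCH effects can also be checked formally with a Ljung-Box test on the squared returns. A minimal sketch, assuming the `returns_ups` series from the chunks above (the choice of 10 lags is illustrative):

```r
# Ljung-Box test on the squared returns: a small p-value indicates
# autocorrelation in the squared series, i.e. ARCH effects
sq.returns <- na.omit(returns_ups)^2
Box.test(sq.returns, lag = 10, type = "Ljung-Box")
```

A rejection here supports moving beyond a constant-variance ARIMA model.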

Code
# returns_jbht
ggAcf(returns_jbht, na.action = na.pass) 

Code
ggPacf(returns_jbht, na.action = na.pass) 

The returns are weakly stationary, with only a few significant spikes in the ACF and PACF (suggesting p = 1 or 4 and q = 1, 4, or 5), so an ARIMA model is a reasonable starting point.

Code
ggAcf(returns_jbht^2, na.action = na.pass) 

Code
ggPacf(returns_jbht^2, na.action = na.pass) 

The squared returns show significant autocorrelation at lags of roughly p = 1 to 6 (PACF) and q = 0 to 5 (ACF). Given this observation, a GARCH model is more appropriate.

Fitting ARIMA

Code
# returns_ups
log.ups <- log(ups.close)
ggtsdisplay(diff(log.ups))

The differenced log series is thus approximately white noise, so the original series resembles a random walk, i.e. ARIMA(0,1,0).

Code
######################## Check for different combinations ########


d=0
i=1
temp= data.frame()
ls=matrix(rep(NA,6*100),nrow=100) # up to 6*6*2 = 72 (p,d,q) combinations


for (p in 1:6)    # AR order p-1 = 0,...,5
{
  for(q in 1:6)   # MA order q-1 = 0,...,5
  {
    for(d in 0:1) # differencing order
    {
      
      if(p-1+d+q-1<=8)
      {
        
        model<- Arima(log.ups,order=c(p-1,d,q-1),include.drift=TRUE) 
        ls[i,]= c(p-1,d,q-1,model$aic,model$bic,model$aicc)
        i=i+1
        #print(i)
        
      }
      
    }
  }
}

temp= as.data.frame(ls)
names(temp)= c("p","d","q","AIC","BIC","AICc")

temp[which.min(temp$AIC),] 
   p d q       AIC       BIC   AICc
63 5 0 3 -20235.07 -20166.99 -20235
Code
temp[which.min(temp$BIC),]
  p d q       AIC       BIC      AICc
2 0 1 0 -20223.76 -20211.38 -20223.76
Code
temp[which.min(temp$AICc),]
   p d q       AIC       BIC   AICc
63 5 0 3 -20235.07 -20166.99 -20235

The lowest AIC model is ARIMA(5,0,3).

Code
# model diagnostics
sarima(log.ups, 0,1,0)
initial  value -4.226790 
iter   1 value -4.226790
final  value -4.226790 
converged
initial  value -4.226790 
iter   1 value -4.226790
final  value -4.226790 
converged
<><><><><><><><><><><><><><>
 
Coefficients: 
         Estimate    SE t.value p.value
constant    4e-04 2e-04  1.5558  0.1198

sigma^2 estimated as 0.0002131361 on 3601 degrees of freedom 
 
AIC = -5.614592  AICc = -5.614592  BIC = -5.611155 
 

Code
sarima(log.ups, 5,0,3)
initial  value -0.791335 
iter   2 value -4.047293
iter   3 value -4.051085
iter   4 value -4.213259
iter   5 value -4.227001
iter   6 value -4.227350
iter   7 value -4.227629
iter   8 value -4.227652
iter   9 value -4.227683
iter  10 value -4.227828
iter  11 value -4.228169
iter  12 value -4.228771
iter  13 value -4.229169
iter  14 value -4.229524
iter  15 value -4.229703
iter  16 value -4.229747
iter  17 value -4.229753
iter  18 value -4.229755
iter  19 value -4.229758
iter  20 value -4.229773
iter  21 value -4.229773
iter  22 value -4.229776
iter  23 value -4.229782
iter  24 value -4.229784
iter  25 value -4.229791
iter  26 value -4.229804
iter  27 value -4.229837
iter  28 value -4.229880
iter  29 value -4.229910
iter  30 value -4.229917
iter  31 value -4.229918
iter  32 value -4.229922
iter  33 value -4.229930
iter  34 value -4.229940
iter  35 value -4.229947
iter  36 value -4.229950
iter  37 value -4.229951
iter  37 value -4.229951
iter  37 value -4.229951
final  value -4.229951 
converged
initial  value -4.227259 
iter   2 value -4.227279
iter   3 value -4.227291
iter   4 value -4.227344
iter   5 value -4.227658
iter   6 value -4.227782
iter   7 value -4.227852
iter   8 value -4.227859
iter   9 value -4.227863
iter  10 value -4.227898
iter  11 value -4.227936
iter  12 value -4.227982
iter  13 value -4.228029
iter  14 value -4.228041
iter  15 value -4.228042
iter  16 value -4.228043
iter  16 value -4.228043
iter  16 value -4.228043
final  value -4.228043 
converged
<><><><><><><><><><><><><><>
 
Coefficients: 
      Estimate     SE t.value p.value
ar1     0.6566 0.0934  7.0334  0.0000
ar2    -0.0843 0.1258 -0.6704  0.5026
ar3    -0.3276 0.1247 -2.6274  0.0086
ar4     0.7026 0.1028  6.8364  0.0000
ar5     0.0519 0.0178  2.9196  0.0035
ma1     0.3287 0.0924  3.5571  0.0004
ma2     0.4277 0.0670  6.3827  0.0000
ma3     0.7537 0.0958  7.8708  0.0000
xmean   4.4820 0.4471 10.0255  0.0000

sigma^2 estimated as 0.0002121721 on 3594 degrees of freedom 
 
AIC = -5.612658  AICc = -5.612644  BIC = -5.595479 
 

According to the model diagnostics, ARIMA(0,1,0) is the better model: it attains a slightly lower AIC with far fewer parameters.

Upon inspecting the Standardized Residuals plot of the model, it is evident that there is still significant variation or volatility remaining. Further modeling is required to address this issue.

Code
# returns_jbht
log.jbht <- log(jbht.close)
ggtsdisplay(diff(log.jbht))

The differenced log series is thus approximately white noise, so the original series resembles a random walk, i.e. ARIMA(0,1,0).

Code
######################## Check for different combinations ########


d=0
i=1
temp= data.frame()
ls=matrix(rep(NA,6*120),nrow=120) # up to 4*4*2 = 32 (p,d,q) combinations


for (p in 1:4)    # AR order p-1 = 0,...,3
{
  for(q in 1:4)   # MA order q-1 = 0,...,3
  {
    for(d in 0:1) # differencing order
    {
      
      if(p-1+d+q-1<=8)
      {
        
        model<- Arima(log.jbht,order=c(p-1,d,q-1),include.drift=TRUE) 
        ls[i,]= c(p-1,d,q-1,model$aic,model$bic,model$aicc)
        i=i+1
        #print(i)
        
      }
      
    }
  }
}

temp= as.data.frame(ls)
names(temp)= c("p","d","q","AIC","BIC","AICc")

temp[which.min(temp$AIC),] 
   p d q       AIC       BIC     AICc
29 3 0 2 -19376.74 -19327.23 -19376.7
Code
temp[which.min(temp$BIC),]
  p d q       AIC       BIC      AICc
2 0 1 0 -19352.54 -19340.16 -19352.54
Code
temp[which.min(temp$AICc),]
   p d q       AIC       BIC     AICc
29 3 0 2 -19376.74 -19327.23 -19376.7

The lowest AIC model is ARIMA(3,0,2).

Code
# model diagnostics
sarima(log.jbht, 0,1,0)
initial  value -4.105854 
iter   1 value -4.105854
final  value -4.105854 
converged
initial  value -4.105854 
iter   1 value -4.105854
final  value -4.105854 
converged
<><><><><><><><><><><><><><>
 
Coefficients: 
         Estimate    SE t.value p.value
constant    5e-04 3e-04  1.7461  0.0809

sigma^2 estimated as 0.0002714567 on 3601 degrees of freedom 
 
AIC = -5.372721  AICc = -5.37272  BIC = -5.369284 
 

Code
sarima(log.jbht, 3,0,2)
initial  value -0.617691 
iter   2 value -0.655217
iter   3 value -0.826342
iter   4 value -1.686839
iter   5 value -2.481529
iter   6 value -3.329225
iter   7 value -3.687963
iter   8 value -3.838591
iter   9 value -4.010884
iter  10 value -4.071998
iter  11 value -4.090821
iter  12 value -4.104386
iter  13 value -4.105705
iter  14 value -4.105798
iter  15 value -4.105801
iter  16 value -4.105802
iter  17 value -4.105821
iter  18 value -4.105856
iter  19 value -4.105931
iter  20 value -4.106019
iter  21 value -4.106067
iter  22 value -4.106076
iter  23 value -4.106078
iter  24 value -4.106084
iter  25 value -4.106299
iter  26 value -4.106407
iter  27 value -4.107417
iter  28 value -4.107432
iter  29 value -4.107436
iter  30 value -4.107503
iter  31 value -4.107507
iter  32 value -4.107510
iter  32 value -4.107510
final  value -4.107510 
converged
initial  value -4.106135 
iter   2 value -4.106138
iter   3 value -4.106168
iter   4 value -4.106482
iter   5 value -4.106639
iter   6 value -4.106712
iter   7 value -4.106727
iter   8 value -4.106737
iter   9 value -4.106741
iter  10 value -4.106761
iter  11 value -4.106799
iter  12 value -4.106864
iter  13 value -4.107027
iter  14 value -4.107143
iter  15 value -4.107304
iter  16 value -4.107306
iter  17 value -4.107383
iter  18 value -4.107387
iter  19 value -4.107419
iter  20 value -4.107425
iter  21 value -4.107428
iter  22 value -4.107431
iter  23 value -4.107456
iter  24 value -4.107467
iter  25 value -4.107499
iter  26 value -4.107538
iter  27 value -4.107573
iter  28 value -4.107575
iter  29 value -4.107576
iter  30 value -4.107577
iter  31 value -4.107578
iter  32 value -4.107581
iter  33 value -4.107583
iter  34 value -4.107584
iter  35 value -4.107584
iter  36 value -4.107584
iter  37 value -4.107584
iter  38 value -4.107585
iter  39 value -4.107587
iter  40 value -4.107592
iter  41 value -4.107594
iter  42 value -4.107596
iter  43 value -4.107597
iter  43 value -4.107597
iter  43 value -4.107597
final  value -4.107597 
converged
<><><><><><><><><><><><><><>
 
Coefficients: 
      Estimate     SE t.value p.value
ar1    -0.6404 0.0703 -9.1152   0e+00
ar2     0.8330 0.0343 24.2844   0e+00
ar3     0.8071 0.0775 10.4137   0e+00
ma1     1.6085 0.0759 21.1901   0e+00
ma2     0.7635 0.0835  9.1443   0e+00
xmean   4.4604 1.2830  3.4766   5e-04

sigma^2 estimated as 0.0002698608 on 3597 degrees of freedom 
 
AIC = -5.373432  AICc = -5.373425  BIC = -5.361407 
 

According to the model diagnostics, ARIMA(3,0,2) is the better model: it has the lower AIC and all of its coefficients are statistically significant.

Upon inspecting the Standardized Residuals plot of the model, it is evident that there is still significant variation or volatility remaining. Further modeling is required to address this issue.

GARCH Model on ARIMA Residuals

Code
# returns_ups
fit.ups <- Arima(log.ups, order=c(0,1,0))
summary(fit.ups)
Series: log.ups 
ARIMA(0,1,0) 

sigma^2 = 0.0002133:  log likelihood = 10112.66
AIC=-20223.33   AICc=-20223.33   BIC=-20217.14

Training set error measures:
                       ME       RMSE         MAE         MPE      MAPE
Training set 0.0003802395 0.01460221 0.009702943 0.008376692 0.2163833
                  MASE        ACF1
Training set 0.9998262 -0.01543624
Code
res.ups <- fit.ups$res
ggtsdisplay(res.ups^2)

The squared residuals show significant autocorrelation out to roughly lag 6 in both the ACF and PACF. Given this observation, a GARCH model is more appropriate.

Code
model <- list() ## storage for fitted models
cc <- 1         ## counter
for (p in 1:6) {
  for (q in 1:6) {
  
model[[cc]] <- garch(res.ups,order=c(q,p),trace=F)
cc <- cc + 1
}
} 

## get AIC values for model evaluation
GARCH_AIC <- sapply(model, AIC) ## model with lowest AIC is the best
which(GARCH_AIC == min(GARCH_AIC))
[1] 5
Code
model[[which(GARCH_AIC == min(GARCH_AIC))]]

Call:
garch(x = res.ups, order = c(q, p), trace = F)

Coefficient(s):
       a0         a1         b1         b2         b3         b4         b5  
4.775e-06  1.446e-01  4.215e-02  9.762e-10  1.616e-01  5.015e-01  1.454e-01  
Code
library(fGarch)
summary(garchFit(~garch(1,5), res.ups,trace = F))

Title:
 GARCH Modelling 

Call:
 garchFit(formula = ~garch(1, 5), data = res.ups, trace = F) 

Mean and Variance Equation:
 data ~ garch(1, 5)
<environment: 0x13c92b708>
 [data = res.ups]

Conditional Distribution:
 norm 

Coefficient(s):
        mu       omega      alpha1       beta1       beta2       beta3  
3.3936e-04  2.1047e-06  6.5174e-02  1.1923e-01  1.0000e-08  1.7235e-01  
     beta4       beta5  
5.0929e-01  1.2591e-01  

Std. Errors:
 based on Hessian 

Error Analysis:
        Estimate  Std. Error  t value Pr(>|t|)    
mu     3.394e-04   2.045e-04    1.659 0.097083 .  
omega  2.105e-06   6.303e-07    3.339 0.000841 ***
alpha1 6.517e-02   1.020e-02    6.390 1.66e-10 ***
beta1  1.192e-01   1.102e-01    1.082 0.279175    
beta2  1.000e-08   9.061e-02    0.000 1.000000    
beta3  1.724e-01   1.159e-01    1.487 0.136939    
beta4  5.093e-01   1.397e-01    3.645 0.000268 ***
beta5  1.259e-01   1.165e-01    1.081 0.279713    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log Likelihood:
 10414.24    normalized:  2.890437 

Description:
 Sat Apr 27 18:32:03 2024 by user:  


Standardised Residuals Tests:
                                   Statistic   p-Value
 Jarque-Bera Test   R    Chi^2  1.697157e+04 0.0000000
 Shapiro-Wilk Test  R    W      9.090323e-01 0.0000000
 Ljung-Box Test     R    Q(10)  1.269500e+01 0.2412279
 Ljung-Box Test     R    Q(15)  2.192269e+01 0.1098472
 Ljung-Box Test     R    Q(20)  2.541226e+01 0.1861119
 Ljung-Box Test     R^2  Q(10)  5.758875e+00 0.8350983
 Ljung-Box Test     R^2  Q(15)  6.634872e+00 0.9670016
 Ljung-Box Test     R^2  Q(20)  8.035491e+00 0.9916303
 LM Arch Test       R    TR^2   6.026321e+00 0.9147494

Information Criterion Statistics:
      AIC       BIC       SIC      HQIC 
-5.776433 -5.762690 -5.776443 -5.771536 

beta1, beta2, beta3, and beta5 are not significant, so I will also try GARCH(1,4) and ARCH(1).

Code
summary(garchFit(~garch(1,4), res.ups,trace = F))

Title:
 GARCH Modelling 

Call:
 garchFit(formula = ~garch(1, 4), data = res.ups, trace = F) 

Mean and Variance Equation:
 data ~ garch(1, 4)
<environment: 0x139cee720>
 [data = res.ups]

Conditional Distribution:
 norm 

Coefficient(s):
        mu       omega      alpha1       beta1       beta2       beta3  
3.3642e-04  1.9620e-06  6.0924e-02  1.5463e-01  1.0000e-08  2.3361e-01  
     beta4  
5.4340e-01  

Std. Errors:
 based on Hessian 

Error Analysis:
        Estimate  Std. Error  t value Pr(>|t|)    
mu     3.364e-04   2.045e-04    1.645 0.099981 .  
omega  1.962e-06   5.720e-07    3.430 0.000604 ***
alpha1 6.092e-02   9.206e-03    6.618 3.65e-11 ***
beta1  1.546e-01   1.178e-01    1.313 0.189252    
beta2  1.000e-08   1.021e-01    0.000 1.000000    
beta3  2.336e-01   1.594e-01    1.466 0.142684    
beta4  5.434e-01   1.104e-01    4.921 8.62e-07 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log Likelihood:
 10413.6    normalized:  2.890258 

Description:
 Sat Apr 27 18:32:03 2024 by user:  


Standardised Residuals Tests:
                                   Statistic   p-Value
 Jarque-Bera Test   R    Chi^2  1.742714e+04 0.0000000
 Shapiro-Wilk Test  R    W      9.084315e-01 0.0000000
 Ljung-Box Test     R    Q(10)  1.277628e+01 0.2364505
 Ljung-Box Test     R    Q(15)  2.192937e+01 0.1096694
 Ljung-Box Test     R    Q(20)  2.554784e+01 0.1812697
 Ljung-Box Test     R^2  Q(10)  5.209197e+00 0.8767724
 Ljung-Box Test     R^2  Q(15)  6.245968e+00 0.9753132
 Ljung-Box Test     R^2  Q(20)  7.690850e+00 0.9937235
 LM Arch Test       R    TR^2   5.539211e+00 0.9375089

Information Criterion Statistics:
      AIC       BIC       SIC      HQIC 
-5.776631 -5.764606 -5.776639 -5.772346 
Code
summary(garchFit(~garch(1,0), res.ups,trace = F))

Title:
 GARCH Modelling 

Call:
 garchFit(formula = ~garch(1, 0), data = res.ups, trace = F) 

Mean and Variance Equation:
 data ~ garch(1, 0)
<environment: 0x139fa2ec8>
 [data = res.ups]

Conditional Distribution:
 norm 

Coefficient(s):
        mu       omega      alpha1  
0.00039470  0.00015123  0.35420618  

Std. Errors:
 based on Hessian 

Error Analysis:
        Estimate  Std. Error  t value Pr(>|t|)    
mu     3.947e-04   2.151e-04    1.835   0.0665 .  
omega  1.512e-04   4.891e-06   30.919   <2e-16 ***
alpha1 3.542e-01   3.651e-02    9.702   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log Likelihood:
 10270.14    normalized:  2.850441 

Description:
 Sat Apr 27 18:32:03 2024 by user:  


Standardised Residuals Tests:
                                   Statistic     p-Value
 Jarque-Bera Test   R    Chi^2  1.160903e+04 0.000000000
 Shapiro-Wilk Test  R    W      9.105667e-01 0.000000000
 Ljung-Box Test     R    Q(10)  2.371606e+01 0.008390795
 Ljung-Box Test     R    Q(15)  3.383041e+01 0.003597032
 Ljung-Box Test     R    Q(20)  3.649039e+01 0.013460494
 Ljung-Box Test     R^2  Q(10)  2.670228e+01 0.002902080
 Ljung-Box Test     R^2  Q(15)  3.533542e+01 0.002202416
 Ljung-Box Test     R^2  Q(20)  4.522113e+01 0.001029758
 LM Arch Test       R    TR^2   2.724930e+01 0.007113096

Information Criterion Statistics:
      AIC       BIC       SIC      HQIC 
-5.699216 -5.694062 -5.699217 -5.697379 

Among the candidates above, GARCH(1,4) has the smallest AIC, so for the UPS stock price the best model is ARIMA(0,1,0) + GARCH(1,4).
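An alternative to fitting the ARIMA and GARCH components separately is to estimate them jointly. A sketch using the rugarch package (not used elsewhere in this report), mirroring the ARIMA(0,1,0) + GARCH(1,4) choice by modeling the differenced log prices:

```r
library(rugarch)
# mean equation: constant only on the differenced log prices,
# matching ARIMA(0,1,0); variance equation: standard GARCH(1,4)
spec <- ugarchspec(
  variance.model = list(model = "sGARCH", garchOrder = c(1, 4)),
  mean.model     = list(armaOrder = c(0, 0), include.mean = TRUE)
)
fit.joint <- ugarchfit(spec, data = na.omit(diff(log.ups)))
show(fit.joint)
```

Joint estimation avoids the two-step plug-in of ARIMA residuals, at the cost of a different optimizer and slightly different estimates.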

Code
# returns_jbht
fit.jbht <- Arima(log.jbht, order=c(3,0,2))
summary(fit.jbht)
Series: log.jbht 
ARIMA(3,0,2) with non-zero mean 

Coefficients:
          ar1     ar2     ar3     ma1     ma2    mean
      -0.6404  0.8330  0.8071  1.6085  0.7635  4.4604
s.e.   0.0703  0.0343  0.0775  0.0759  0.0835  1.2830

sigma^2 = 0.0002703:  log likelihood = 9687.24
AIC=-19360.47   AICc=-19360.44   BIC=-19317.15

Training set error measures:
                       ME       RMSE        MAE        MPE      MAPE      MASE
Training set 0.0004864017 0.01642744 0.01182566 0.01080249 0.2688679 0.9994574
                      ACF1
Training set -0.0006273293
Code
res.jbht <- fit.jbht$res
ggtsdisplay(res.jbht^2)

The squared residuals show significant autocorrelation out to roughly lag 6 in both the ACF and PACF. Given this observation, a GARCH model is more appropriate.

Code
model <- list() ## storage for fitted models
cc <- 1         ## counter
for (p in 1:6) {
  for (q in 1:6) {
  
model[[cc]] <- garch(res.jbht,order=c(q,p),trace=F)
cc <- cc + 1
}
} 

## get AIC values for model evaluation
GARCH_AIC <- sapply(model, AIC) ## model with lowest AIC is the best
which(GARCH_AIC == min(GARCH_AIC))
[1] 1
Code
model[[which(GARCH_AIC == min(GARCH_AIC))]]

Call:
garch(x = res.jbht, order = c(q, p), trace = F)

Coefficient(s):
       a0         a1         b1  
5.492e-06  4.373e-02  9.357e-01  
Code
library(fGarch)
summary(garchFit(~garch(1,1), res.jbht,trace = F))

Title:
 GARCH Modelling 

Call:
 garchFit(formula = ~garch(1, 1), data = res.jbht, trace = F) 

Mean and Variance Equation:
 data ~ garch(1, 1)
<environment: 0x139e5d890>
 [data = res.jbht]

Conditional Distribution:
 norm 

Coefficient(s):
        mu       omega      alpha1       beta1  
5.0221e-04  5.4495e-06  4.3369e-02  9.3621e-01  

Std. Errors:
 based on Hessian 

Error Analysis:
        Estimate  Std. Error  t value Pr(>|t|)    
mu     5.022e-04   2.475e-04    2.029   0.0424 *  
omega  5.450e-06   1.384e-06    3.938 8.22e-05 ***
alpha1 4.337e-02   6.441e-03    6.733 1.66e-11 ***
beta1  9.362e-01   1.014e-02   92.289  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log Likelihood:
 9882.896    normalized:  2.742963 

Description:
 Sat Apr 27 18:32:04 2024 by user:  


Standardised Residuals Tests:
                                   Statistic   p-Value
 Jarque-Bera Test   R    Chi^2  1773.2943427 0.0000000
 Shapiro-Wilk Test  R    W         0.9706826 0.0000000
 Ljung-Box Test     R    Q(10)     6.8524567 0.7392967
 Ljung-Box Test     R    Q(15)    10.4572844 0.7900329
 Ljung-Box Test     R    Q(20)    11.9815491 0.9167096
 Ljung-Box Test     R^2  Q(10)     4.7732784 0.9057995
 Ljung-Box Test     R^2  Q(15)    10.5959357 0.7806767
 Ljung-Box Test     R^2  Q(20)    16.3539424 0.6944377
 LM Arch Test       R    TR^2     10.4428505 0.5771698

Information Criterion Statistics:
      AIC       BIC       SIC      HQIC 
-5.483706 -5.476834 -5.483708 -5.481257 

Among the candidates above, GARCH(1,1) has the smallest AIC, so for the JBHT stock price the best model is ARIMA(3,0,2) + GARCH(1,1).
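With a fitted GARCH model in hand, fGarch can also produce volatility forecasts. A sketch, assuming the residual series `res.jbht` from above (the object name `garch.jbht` is illustrative):

```r
library(fGarch)
garch.jbht <- garchFit(~ garch(1, 1), data = res.jbht, trace = FALSE)
# 10-step-ahead forecasts of the conditional mean and standard deviation
predict(garch.jbht, n.ahead = 10)
```

The forecast standard deviations converge toward the model's long-run volatility as the horizon grows.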

Final Model

Best model: ARIMA(0,1,0)+GARCH(1,4)

Code
# returns_ups
summary(fit.ups)
Series: log.ups 
ARIMA(0,1,0) 

sigma^2 = 0.0002133:  log likelihood = 10112.66
AIC=-20223.33   AICc=-20223.33   BIC=-20217.14

Training set error measures:
                       ME       RMSE         MAE         MPE      MAPE
Training set 0.0003802395 0.01460221 0.009702943 0.008376692 0.2163833
                  MASE        ACF1
Training set 0.9998262 -0.01543624
Code
summary(garchFit(~garch(1,4), res.ups,trace = F))

Title:
 GARCH Modelling 

Call:
 garchFit(formula = ~garch(1, 4), data = res.ups, trace = F) 

Mean and Variance Equation:
 data ~ garch(1, 4)
<environment: 0x14d8f3650>
 [data = res.ups]

Conditional Distribution:
 norm 

Coefficient(s):
        mu       omega      alpha1       beta1       beta2       beta3  
3.3642e-04  1.9620e-06  6.0924e-02  1.5463e-01  1.0000e-08  2.3361e-01  
     beta4  
5.4340e-01  

Std. Errors:
 based on Hessian 

Error Analysis:
        Estimate  Std. Error  t value Pr(>|t|)    
mu     3.364e-04   2.045e-04    1.645 0.099981 .  
omega  1.962e-06   5.720e-07    3.430 0.000604 ***
alpha1 6.092e-02   9.206e-03    6.618 3.65e-11 ***
beta1  1.546e-01   1.178e-01    1.313 0.189252    
beta2  1.000e-08   1.021e-01    0.000 1.000000    
beta3  2.336e-01   1.594e-01    1.466 0.142684    
beta4  5.434e-01   1.104e-01    4.921 8.62e-07 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log Likelihood:
 10413.6    normalized:  2.890258 

Description:
 Sat Apr 27 18:32:04 2024 by user:  


Standardised Residuals Tests:
                                   Statistic   p-Value
 Jarque-Bera Test   R    Chi^2  1.742714e+04 0.0000000
 Shapiro-Wilk Test  R    W      9.084315e-01 0.0000000
 Ljung-Box Test     R    Q(10)  1.277628e+01 0.2364505
 Ljung-Box Test     R    Q(15)  2.192937e+01 0.1096694
 Ljung-Box Test     R    Q(20)  2.554784e+01 0.1812697
 Ljung-Box Test     R^2  Q(10)  5.209197e+00 0.8767724
 Ljung-Box Test     R^2  Q(15)  6.245968e+00 0.9753132
 Ljung-Box Test     R^2  Q(20)  7.690850e+00 0.9937235
 LM Arch Test       R    TR^2   5.539211e+00 0.9375089

Information Criterion Statistics:
      AIC       BIC       SIC      HQIC 
-5.776631 -5.764606 -5.776639 -5.772346 
Code
checkresiduals(garch(res.ups, order = c(4,1), trace = F))


    Ljung-Box test

data:  Residuals
Q* = 7.0853, df = 10, p-value = 0.7174

Model df: 0.   Total lags used: 10

The model’s residual plots look generally satisfactory, with only a few notable spikes in the ACF, and the AIC values are low, indicating a good fit. Not all of the GARCH coefficients are statistically significant, however, which suggests some structure may not be fully captured. The Ljung-Box p-values are all above 0.05, indicating that no significant autocorrelation remains in the residuals.

Best model: ARIMA(3,0,2)+GARCH(1,1)

Code
# returns_jbht
summary(fit.jbht)
Series: log.jbht 
ARIMA(3,0,2) with non-zero mean 

Coefficients:
          ar1     ar2     ar3     ma1     ma2    mean
      -0.6404  0.8330  0.8071  1.6085  0.7635  4.4604
s.e.   0.0703  0.0343  0.0775  0.0759  0.0835  1.2830

sigma^2 = 0.0002703:  log likelihood = 9687.24
AIC=-19360.47   AICc=-19360.44   BIC=-19317.15

Training set error measures:
                       ME       RMSE        MAE        MPE      MAPE      MASE
Training set 0.0004864017 0.01642744 0.01182566 0.01080249 0.2688679 0.9994574
                      ACF1
Training set -0.0006273293
Code
summary(garchFit(~garch(1,1), res.jbht,trace = F))

Title:
 GARCH Modelling 

Call:
 garchFit(formula = ~garch(1, 1), data = res.jbht, trace = F) 

Mean and Variance Equation:
 data ~ garch(1, 1)
<environment: 0x14d42f0e8>
 [data = res.jbht]

Conditional Distribution:
 norm 

Coefficient(s):
        mu       omega      alpha1       beta1  
5.0221e-04  5.4495e-06  4.3369e-02  9.3621e-01  

Std. Errors:
 based on Hessian 

Error Analysis:
        Estimate  Std. Error  t value Pr(>|t|)    
mu     5.022e-04   2.475e-04    2.029   0.0424 *  
omega  5.450e-06   1.384e-06    3.938 8.22e-05 ***
alpha1 4.337e-02   6.441e-03    6.733 1.66e-11 ***
beta1  9.362e-01   1.014e-02   92.289  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log Likelihood:
 9882.896    normalized:  2.742963 

Description:
 Sat Apr 27 18:32:05 2024 by user:  


Standardised Residuals Tests:
                                   Statistic   p-Value
 Jarque-Bera Test   R    Chi^2  1773.2943427 0.0000000
 Shapiro-Wilk Test  R    W         0.9706826 0.0000000
 Ljung-Box Test     R    Q(10)     6.8524567 0.7392967
 Ljung-Box Test     R    Q(15)    10.4572844 0.7900329
 Ljung-Box Test     R    Q(20)    11.9815491 0.9167096
 Ljung-Box Test     R^2  Q(10)     4.7732784 0.9057995
 Ljung-Box Test     R^2  Q(15)    10.5959357 0.7806767
 Ljung-Box Test     R^2  Q(20)    16.3539424 0.6944377
 LM Arch Test       R    TR^2     10.4428505 0.5771698

Information Criterion Statistics:
      AIC       BIC       SIC      HQIC 
-5.483706 -5.476834 -5.483708 -5.481257 
Code
checkresiduals(garch(res.jbht, order = c(1,1),trace = F))


    Ljung-Box test

data:  Residuals
Q* = 6.7644, df = 10, p-value = 0.7475

Model df: 0.   Total lags used: 10

The model’s residual plots look generally satisfactory, with only a few notable spikes in the ACF, and the AIC values are low, indicating a good fit. All of the reported ARIMA and GARCH coefficients are statistically significant, and the Ljung-Box p-values are all above 0.05, indicating that no significant autocorrelation remains in the residuals.

Model Equations

\(\operatorname{ARIMA}(0,1,0)\)

\[ \left(1-B\right) Y_t=\epsilon_t \] \(\operatorname{GARCH}(1,4)\) \[ \sigma_t^2=\omega+\alpha_1 \varepsilon_{t-1}^2+\beta_1 \sigma_{t-1}^2+\beta_2 \sigma_{t-2}^2+\beta_3 \sigma_{t-3}^2+\beta_4 \sigma_{t-4}^2 \]

\(\operatorname{ARIMA}(3,0,2)\)

\[ \left(1-\phi_1 B-\phi_2 B^2-\phi_3 B^3\right) Y_t=\left(1+\theta_1 B+\theta_2 B^2\right) \epsilon_t \] \(\operatorname{GARCH}(1,1)\)

\[ \sigma_t^2=\omega+\alpha_1 \varepsilon_{t-1}^2+\beta_1 \sigma_{t-1}^2 \]
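The GARCH(1,1) recursion above can be simulated directly to see how it generates volatility clustering. A minimal sketch with illustrative parameter values (not the fitted estimates):

```r
set.seed(1)
n <- 1000
omega <- 5e-6; alpha1 <- 0.05; beta1 <- 0.9    # illustrative values
eps <- sigma2 <- numeric(n)
sigma2[1] <- omega / (1 - alpha1 - beta1)      # unconditional variance
eps[1] <- sqrt(sigma2[1]) * rnorm(1)
for (t in 2:n) {
  sigma2[t] <- omega + alpha1 * eps[t - 1]^2 + beta1 * sigma2[t - 1]
  eps[t]    <- sqrt(sigma2[t]) * rnorm(1)
}
plot.ts(eps, main = "Simulated GARCH(1,1) returns")
```

Because each large shock feeds into the next period's variance, calm and turbulent stretches alternate, as in the returns plots above.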

Back to top